The onset of the COVID-19 pandemic has put mental health at risk. Social counselling has gained remarkable significance in this environment. Unlike general goal-oriented dialogues, conversations between a patient and a therapist are considerably implicit, even though the objective of the conversation is quite apparent. In such a scenario, understanding the intent of the patient is imperative for providing effective counselling in therapy sessions, and the same applies to a dialogue system as well. In this work, we take a small step towards developing an automated dialogue system for mental-health counselling. We develop a novel dataset, named HOPE, to provide a platform for dialogue-act classification in counselling conversations. We identify the requirements of such conversations and propose twelve domain-specific dialogue-act (DAC) labels. We collect 12.9K utterances from publicly available counselling-session videos on YouTube, extract their transcripts, clean them, and annotate them with DAC labels. Furthermore, we propose SPARTA, a transformer-based architecture with novel speaker- and time-aware contextual learning for dialogue-act classification. Our evaluation shows convincing performance over several baselines, achieving state-of-the-art results on HOPE. We also supplement our experiments with extensive empirical and qualitative analyses of SPARTA.
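A minimal sketch of the speaker- and time-aware contextual encoding idea in PyTorch; the dimensions, fusion by addition, and two-speaker setup are illustrative assumptions following the abstract, not the authors' exact SPARTA architecture:

```python
# Sketch: speaker- and time-aware contextual dialogue-act classification
# (illustrative; not the exact SPARTA model).
import torch
import torch.nn as nn

class SpeakerTimeAwareDAC(nn.Module):
    def __init__(self, utt_dim=768, hidden=256, n_speakers=2, max_turns=128, n_labels=12):
        super().__init__()
        self.proj = nn.Linear(utt_dim, hidden)
        self.speaker_emb = nn.Embedding(n_speakers, hidden)   # patient / therapist
        self.turn_emb = nn.Embedding(max_turns, hidden)       # temporal position of each utterance
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, n_labels)         # 12 domain-specific DAC labels

    def forward(self, utt_vecs, speaker_ids):
        # utt_vecs: (batch, turns, utt_dim) pre-computed utterance embeddings
        # speaker_ids: (batch, turns), 0 = patient, 1 = therapist
        b, t, _ = utt_vecs.shape
        pos = torch.arange(t, device=utt_vecs.device).expand(b, t)
        x = self.proj(utt_vecs) + self.speaker_emb(speaker_ids) + self.turn_emb(pos)
        ctx = self.encoder(x)                  # each utterance attends to the whole session
        return self.classifier(ctx)            # (batch, turns, n_labels) per-utterance logits

model = SpeakerTimeAwareDAC()
logits = model(torch.randn(2, 16, 768), torch.randint(0, 2, (2, 16)))
```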
Modern telecom systems are monitored with performance and system logs from multiple application layers and components. Detecting anomalous events from these logs is key to identifying security breaches, resource over-utilization, critical/fatal errors, etc. Current supervised log anomaly detection frameworks tend to perform poorly on new types or signatures of anomalies with few or no samples in the training data. In this work, we propose a meta-learning-based log anomaly detection framework (LogAnMeta) for detecting anomalies from sequences of log events with few samples. LogAnMeta trains a hybrid few-shot classifier in an episodic manner. The experimental results demonstrate the efficacy of our proposed method.
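The abstract does not specify the hybrid few-shot classifier, so the sketch below uses one common episodic formulation, a prototypical network over log-event sequences; all names, dimensions, and the toy data are illustrative assumptions:

```python
# Illustrative episodic few-shot step for log-sequence anomaly classes
# (prototypical-network style; the paper's hybrid classifier may differ).
import torch
import torch.nn as nn

class LogSeqEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, seqs):                  # seqs: (n, seq_len) log-event ids
        _, h = self.gru(self.emb(seqs))
        return h.squeeze(0)                   # (n, dim) sequence embeddings

def episode_loss(encoder, support, support_y, query, query_y, n_classes):
    z_s, z_q = encoder(support), encoder(query)
    # Class prototype = mean embedding of its few support samples.
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_classes)])
    dists = torch.cdist(z_q, protos)          # distance of each query to each prototype
    return nn.functional.cross_entropy(-dists, query_y)

enc = LogSeqEncoder()
loss = episode_loss(enc,
                    torch.randint(0, 1000, (10, 20)), torch.arange(10) % 2,
                    torch.randint(0, 1000, (6, 20)), torch.arange(6) % 2, 2)
loss.backward()
```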
Climate change has increased the intensity, frequency, and duration of extreme weather events and natural disasters across the world. While the increased data on natural disasters improves the scope of machine learning (ML) in this field, progress is relatively slow. One bottleneck is the lack of benchmark datasets that would allow ML researchers to quantify their progress against a standard metric. The objective of this short paper is to explore the state of benchmark datasets for ML tasks related to natural disasters, categorizing them according to the disaster management cycle. We compile a list of existing benchmark datasets introduced in the past five years. We propose a web platform - NADBenchmarks - where researchers can search for benchmark datasets for natural disasters, and we develop a preliminary version of such a platform using our compiled list. This paper is intended to aid researchers in finding benchmark datasets to train their ML models on, and provide general directions for topics where they can contribute new benchmark datasets.
Cancer is one of the most challenging diseases because of its complexity, variability, and diversity of causes. It has been one of the major research topics over the past decades, yet it is still poorly understood. To this end, multifaceted therapeutic frameworks are indispensable. \emph{Anticancer peptides} (ACPs) are among the most promising treatment options, but their large-scale identification and synthesis require reliable prediction methods, which remains an open problem. In this paper, we present an intuitive classification strategy that differs from traditional \emph{black box} methods and is based on the well-known statistical theory of \emph{sparse-representation classification} (SRC). Specifically, we create over-complete dictionary matrices by embedding the \emph{composition of K-spaced amino acid pairs} (CKSAAP). Unlike traditional SRC frameworks, we use an efficient \emph{matching pursuit} solver instead of the computationally expensive \emph{basis pursuit} solver. Furthermore, \emph{kernel principal component analysis} (KPCA) is employed to cope with non-linearity and to reduce the dimension of the feature space, whereas the \emph{synthetic minority oversampling technique} (SMOTE) is used to balance the dictionary. The proposed method is evaluated on two benchmark datasets using well-known statistical measures and is found to outperform the existing methods. The results show the highest sensitivity with the most balanced accuracy, which might be beneficial for understanding structural and chemical aspects and for developing new ACPs. The Google-Colab implementation of the proposed method is available at the author's GitHub page (\href{https://github.com/ehtisham-Fazal/ACP-Kernel-SRC}{https://github.com/ehtisham-fazal/ACP-Kernel-SRC}).
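A sketch of the described pipeline using off-the-shelf components (scikit-learn's KernelPCA and OrthogonalMatchingPursuit, imbalanced-learn's SMOTE); the CKSAAP feature extraction is stubbed with random data, dimensions are illustrative, and the residual-based decision rule follows standard SRC rather than the authors' exact implementation:

```python
# Sketch: CKSAAP-style features -> KPCA -> SMOTE-balanced dictionary ->
# matching-pursuit coding, class decided by reconstruction residual.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import OrthogonalMatchingPursuit
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))               # stand-in for CKSAAP feature vectors
y = (rng.random(120) < 0.25).astype(int)      # imbalanced ACP / non-ACP labels

Z = KernelPCA(n_components=40, kernel="rbf").fit_transform(X)  # non-linearity + reduction
D, yD = SMOTE(random_state=0).fit_resample(Z, y)               # balance the dictionary

def src_predict(x, D, yD, n_nonzero=15):
    # Sparse-code x over the dictionary with an efficient matching-pursuit solver.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D.T, x)
    coef = omp.coef_
    residuals = []
    for c in (0, 1):                           # reconstruct using only class-c atoms
        part = np.where(yD == c, coef, 0.0)
        residuals.append(np.linalg.norm(x - D.T @ part))
    return int(np.argmin(residuals))           # smallest residual wins

print(src_predict(Z[0], D, yD))
```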
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Learned embeddings are widely used to obtain concise data representation and enable transfer learning between different data sets and tasks. In this paper, we present Silhouette, our approach that leverages publicly-available performance data sets to learn CPU embeddings. We show how these embeddings enable transfer learning between data sets of different types and sizes. Each of these scenarios leads to an improvement in accuracy for the target data set.
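The abstract leaves the training objective open; one common way to learn such embeddings is an embedding table trained end-to-end on a performance-prediction task, sketched below in PyTorch. The task, feature set, and dimensions are assumptions for illustration, not the Silhouette pipeline itself:

```python
# Sketch: learn one embedding vector per CPU model from performance data.
import torch
import torch.nn as nn

N_CPUS, EMB_DIM, N_BENCH_FEATS = 50, 8, 6

class PerfModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.cpu_emb = nn.Embedding(N_CPUS, EMB_DIM)      # one learned vector per CPU
        self.head = nn.Sequential(nn.Linear(EMB_DIM + N_BENCH_FEATS, 32),
                                  nn.ReLU(), nn.Linear(32, 1))

    def forward(self, cpu_ids, bench_feats):
        return self.head(torch.cat([self.cpu_emb(cpu_ids), bench_feats], dim=1))

model = PerfModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cpu_ids = torch.randint(0, N_CPUS, (64,))
feats, scores = torch.randn(64, N_BENCH_FEATS), torch.randn(64, 1)  # stand-in benchmark data
loss = nn.functional.mse_loss(model(cpu_ids, feats), scores)
loss.backward(); opt.step()

# After training, model.cpu_emb.weight can be reused as CPU features on a
# different (possibly smaller) target data set, enabling transfer learning.
```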
Humans have perfected the art of learning from multiple modalities through sensory organs. Despite their impressive predictive performance on a single modality, neural networks cannot reach human-level accuracy with respect to multiple modalities. This is a particularly challenging task due to variations in the structure of respective modalities. Conditional Batch Normalization (CBN) is a popular method that was proposed to learn contextual features to aid deep learning tasks. This technique uses auxiliary data to improve representational power by learning affine transformations for convolutional neural networks. Despite the boost in performance observed by using CBN layers, our work reveals that the visual features learned by introducing auxiliary data via CBN deteriorate. We perform comprehensive experiments to evaluate the brittleness of CBN networks across various datasets, suggesting that learning from visual features alone could often be superior for generalization. We evaluate CBN models on natural images for bird classification and on histology images for cancer type classification. We observe that the CBN network learns close to no visual features on the bird classification dataset and partial visual features on the histology dataset. Our extensive experiments reveal that CBN may encourage shortcut learning between the auxiliary data and labels.
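For reference, a minimal Conditional Batch Normalization layer in PyTorch, in which the affine scale and shift are predicted from the auxiliary data; this follows the standard CBN formulation rather than the exact networks evaluated in the paper:

```python
# Minimal Conditional Batch Normalization: BatchNorm's affine parameters
# are produced from an auxiliary input instead of being fixed weights.
import torch
import torch.nn as nn

class CondBatchNorm2d(nn.Module):
    def __init__(self, n_channels, aux_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(n_channels, affine=False)  # normalization only
        self.gamma = nn.Linear(aux_dim, n_channels)         # per-channel scale from aux data
        self.beta = nn.Linear(aux_dim, n_channels)          # per-channel shift from aux data

    def forward(self, x, aux):
        # x: (batch, C, H, W) visual features; aux: (batch, aux_dim) auxiliary data
        g = self.gamma(aux).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(aux).unsqueeze(-1).unsqueeze(-1)
        return self.bn(x) * (1 + g) + b                     # condition features on aux

cbn = CondBatchNorm2d(16, 10)
out = cbn(torch.randn(4, 16, 8, 8), torch.randn(4, 10))
```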
Reinforcement learning (RL) operating on attack graphs leveraging cyber terrain principles is used to develop the reward and state associated with the determination of surveillance detection routes (SDR). This work extends previous efforts on developing RL methods for path analysis within enterprise networks. It focuses on building SDR in which the routes explore network services while trying to evade risk. RL supports the development of these routes through a reward mechanism that helps realize these paths. The RL algorithm is modified with a novel warm-up phase that decides, during initial exploration, which areas of the network are safe to explore based on the rewards and a penalty scale factor.
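The abstract does not detail the warm-up mechanics; the sketch below illustrates one plausible shape, where random walks over a toy attack graph mark nodes as safe to explore when their scaled penalty stays below a threshold. The graph, penalties, and safety rule are invented for illustration:

```python
# Illustrative warm-up phase: probe the attack graph and collect the set
# of nodes whose scaled detection penalty is low enough to explore.
import random

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}     # toy attack graph (node -> successors)
penalty = {0: 0.1, 1: 0.9, 2: 0.2, 3: 0.3}        # detection risk per node
PENALTY_SCALE, SAFE_THRESHOLD = 2.0, 1.0

def warm_up(episodes=200):
    safe = set()
    for _ in range(episodes):
        node = 0
        while graph[node]:
            node = random.choice(graph[node])
            if PENALTY_SCALE * penalty[node] < SAFE_THRESHOLD:
                safe.add(node)                     # low scaled risk -> explorable
    return safe

safe_nodes = warm_up()
# A subsequent Q-learning phase would restrict exploration to `safe_nodes`
# while building surveillance detection routes.
print(safe_nodes)
```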
VQ (Vendor Qualification) and IOQ (Installation and Operational Qualification) audits are implemented in warehouses to ensure that all equipment deployed in the fulfillment network meets quality standards. Audit checks may be skipped when many of them have to be conducted within a short period of time. In addition, exploratory data analysis revealed several instances of similar checks being performed on the same assets, duplicating the work. In this work, natural language processing and machine learning are applied to a large checklist dataset from a warehouse network to identify similarities and duplicates, and to predict non-critical checks with high pass rates. The study proposes ML classifiers that identify IOQ and VQ checks with a high probability of passing and assign priorities to checks, so that checks can be prioritized when there is not enough time to execute all of them. The study recommends an NLP-based BlazingText classifier for checklists with high pass rates, which can de-prioritize 10%-37% of the checks and substantially reduce costs. The applied algorithm outperforms Random Forest and neural-network classifiers, achieving an area under the curve of 90%. Because of the data imbalance, using the F1 score had a positive impact on the model's accuracy, improving it from 8% to 75%. Additionally, the proposed duplicate-detection process identifies 17% of the checks as possibly redundant and suitable for pruning.
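BlazingText is an AWS SageMaker built-in and is not reproduced here; as a self-contained stand-in, a TF-IDF plus logistic-regression pipeline sketches the same idea of triaging checks by predicted pass probability (check texts and labels are hypothetical toy data):

```python
# Stand-in for the checklist pass-probability classifier (the paper uses
# BlazingText on SageMaker; this is an illustrative local equivalent).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

checks = ["verify conveyor belt alignment", "inspect emergency stop button",
          "confirm rack label placement", "validate scanner calibration"] * 10
passed = [1, 1, 1, 0] * 10                    # 1 = historically passes, 0 = fails

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(checks, passed)

# Checks with a high predicted pass probability can be de-prioritized
# when there is not enough time to run every audit.
probs = clf.predict_proba(["inspect emergency stop button"])[:, 1]
print(probs, f1_score(passed, clf.predict(checks)))   # F1 chosen due to imbalance
```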
The use of meta-rules in logic, i.e., rules whose content includes other rules, has recently gained attention in the setting of non-monotonic reasoning: a first logical formalisation and efficient algorithms to compute the (meta-)extensions of such theories were proposed in Olivieri et al. (2021). This work extends that logical framework by considering the deontic aspect. The resulting logic is able not only to model policies but also to address well-known aspects that occur in many legal systems. The use of Defeasible Logic (DL) to model meta-rules in the application areas just mentioned has been investigated; however, within this line of research, the studies above did not focus on the general computational properties of meta-rules. This study fills that gap with two main contributions. First, we introduce and formalise two variants of Defeasible Deontic Logic with meta-rules to represent (1) defeasible meta-theories with deontic modalities, and (2) two different types of conflicts between rules: Simple Conflict Defeasible Deontic Logic and Cautious Conflict Defeasible Deontic Logic. Second, we advance efficient algorithms to compute the extensions of both variants.
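As a toy illustration of extension computation for plain defeasible rules (propositional, without meta-rules or deontic modalities, and thus far simpler than the paper's algorithms), the sketch below fires a rule only when its body is proved and no applicable rule for the opposite conclusion beats it; the facts, rules, and strengths are invented:

```python
# Toy forward-chaining extension computation for plain defeasible rules.
facts = {"employee"}
# (body, head, strength); "~p" denotes the negation of p
rules = [({"employee"}, "liable", 1),
         ({"employee"}, "~liable", 2)]        # the stronger rule wins the conflict

def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

def extension(facts, rules):
    proved = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head, strength in rules:
            if body <= proved and head not in proved:
                # Defeated if an applicable rule for the negated head is at least as strong.
                rivals = [s for b, h, s in rules if h == neg(head) and b <= proved]
                if all(strength > s for s in rivals):
                    proved.add(head)
                    changed = True
    return proved

print(extension(facts, rules))                # {'employee', '~liable'}
```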